Talk given by Jonathan Victor, Cornell University on May 27, 2009. Given to the Redwood Center for Theoretical Neuroscience at UC Berkeley.
A central problem in systems neuroscience is to understand the nature of cortical computations, and how they are implemented. Primary visual cortex is an excellent model system for addressing these questions, since its inputs are readily controlled and its anatomy is well-understood. Most studies have suggested that neurons of primary visual cortex can be modeled as a bank of feedforward filters and simple nonlinearities. However, here we present evidence of widespread and dramatic differences between the computations performed by real cortical neurons and computations of models based on a feedforward cascade. These differences suggest that a strongly recurrent network is an appropriate basic framework for understanding cortical computations.
June 28, 2009. Subject:
V1: Hermite functions and infinitesimal windows instead of Gabor
The lecture is a good effort to synthesize what is known about computations in V1 and to propose a new model. It starts by comparing the architecture of the retina with that of V1, explaining the linear equation for ganglion cells (linear summation in time and space) and introducing the nonlinearity ("the tweak"). V1 does not have a bottleneck as the retina does, so it does not face the same pressure for redundancy reduction. A new stimulus set is then proposed, based not on minimizing the joint spread in space and frequency (i.e. Gabor functions) but on 2D-Hermite functions, which allow the gain control to be factorized. The traditional view, based on that minimization and on stimuli designed to target specific features of the model, is replaced by one based on "confinement", although the reasons for this change are not completely clear.
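As a rough illustration of the stimulus construction (a sketch, not the lecturer's actual implementation), a Cartesian 2D-Hermite function is simply a product of 1D Hermite functions under a shared circular Gaussian envelope; the grid range and normalization below are my own choices:

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

def hermite_2d(j, k, x, y):
    """Cartesian 2D-Hermite function of ranks (j, k): product of
    physicists' Hermite polynomials with a circular Gaussian envelope,
    normalized to unit energy over the plane."""
    cj = np.zeros(j + 1); cj[j] = 1.0   # coefficient vector selecting H_j
    ck = np.zeros(k + 1); ck[k] = 1.0   # coefficient vector selecting H_k
    norm = 1.0 / sqrt(2 ** (j + k) * factorial(j) * factorial(k) * pi)
    return norm * hermval(x, cj) * hermval(y, ck) * np.exp(-(x**2 + y**2) / 2)

# evaluate one stimulus patch on a grid
g = np.linspace(-4, 4, 65)
X, Y = np.meshgrid(g, g)
patch = hermite_2d(2, 1, X, Y)
```

Because the envelope is Gaussian rather than sinusoid-times-Gaussian, these functions "confine" the stimulus to a region of space without committing to a preferred spatial frequency the way a Gabor does.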
Two basic sets are explained: Cartesian (rectangular grid) and polar (annular or circular patterns). An additive model (linear filtering + rectification) is assumed for the neuronal operation, and the model is tested against the following prediction: the filters estimated with each set should be similar. This prediction fails in some cells but is correct in others. The conclusion is that V1 cannot be modeled properly with cascade models; the degree of long-range feedback in V1 suggests that recurrent nonlinear processing has to be modeled more explicitly.
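The logic of that test can be shown in a toy simulation, with a hypothetical ground-truth filter and generic Gaussian ensembles standing in for the Cartesian and polar Hermite sets; for a true linear + rectification (LN) cascade, the filters recovered from the two ensembles must agree:

```python
import numpy as np

rng = np.random.default_rng(0)

def ln_response(stimuli, filt, threshold=0.0):
    """Additive LN model: linear projection followed by half-wave
    rectification (firing rates cannot be negative)."""
    drive = stimuli @ filt
    return np.maximum(drive - threshold, 0.0)

def sta(stimuli, rates):
    """Spike-triggered average: rate-weighted mean stimulus, a valid
    filter estimate for spherically symmetric inputs."""
    return rates @ stimuli / rates.sum()

n = 64                                   # hypothetical 8x8 pixel patch
true_filt = rng.standard_normal(n)       # assumed ground-truth filter

# two independent stimulus ensembles (stand-ins for the two Hermite sets)
stim_a = rng.standard_normal((5000, n))
stim_b = rng.standard_normal((5000, n))

est_a = sta(stim_a, ln_response(stim_a, true_filt))
est_b = sta(stim_b, ln_response(stim_b, true_filt))

# cascade-model prediction: the two filter estimates should match
match = np.corrcoef(est_a, est_b)[0, 1]
```

In this simulated cascade the correlation `match` is close to 1; the lecture's point is that for many real V1 cells the Cartesian- and polar-derived filters disagree, which a feedforward cascade cannot produce.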
In the last part, random binary stimuli are proposed as a way to map V1 neurons with correlated inputs.
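The lecture does not specify the construction, but one standard way to obtain binary stimuli with controlled spatial correlations, sketched here only as a plausible illustration, is the dichotomized-Gaussian approach: threshold correlated Gaussian noise at zero (the exponential covariance and its length scale below are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def correlated_binary(cov_chol, n_frames):
    """Dichotomized Gaussian: draw frames of correlated Gaussian noise
    and threshold at zero, yielding +/-1 pixels whose sign correlations
    inherit the Gaussian spatial structure."""
    z = rng.standard_normal((n_frames, cov_chol.shape[0])) @ cov_chol.T
    return np.where(z > 0, 1.0, -1.0)

# assumed covariance: 16 pixels in a row, correlation decaying as exp(-d/2)
idx = np.arange(16)
cov = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 2.0)
chol = np.linalg.cholesky(cov)

frames = correlated_binary(chol, 10000)
neighbor_corr = np.corrcoef(frames[:, 0], frames[:, 1])[0, 1]
```

Each frame is binary, yet neighboring pixels co-vary, which is the kind of input correlation the proposed mapping method would need to handle.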
The lecture is clear, dynamic and interesting, and the audience is invited to participate actively. Unfortunately, the participants' voices are not audible. The conclusion is reasonable. The need for this approach is, however, questioned very briefly in a slide by the lecturer himself: "Can we really distinguish a recurrent network from multiple parallel cascades?" True. See synfire chains, for instance. The particular conditions under which this model works better should then be specified.

Juan F Gomez-M, Ph.D.